
    On the fast computation of the weight enumerator polynomial and the t-value of digital nets over finite abelian groups

    In this paper we introduce digital nets over finite abelian groups, which contain digital nets over finite fields and certain rings as a special case. We prove a MacWilliams-type identity for such digital nets. This identity can be used to compute the strict $t$-value of a digital net over a finite abelian group. If the digital net has $N$ points in the $s$-dimensional unit cube $[0,1]^s$, then the $t$-value can be computed in $\mathcal{O}(Ns\log N)$ operations and the weight enumerator polynomial in $\mathcal{O}(Ns(\log N)^2)$ operations, where operations mean arithmetic operations on integers. By precomputing some values, the number of operations needed to compute the weight enumerator polynomial can be reduced further.
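    For orientation, the sketch below illustrates the classical binary-code analogue of the weight-enumerator/dual relationship that the paper generalizes: a brute-force weight distribution of a toy [7,4] Hamming code and its dual via the MacWilliams identity in Krawtchouk form. This is only an analogy, not the paper's fast algorithm for digital nets over finite abelian groups; the generator matrix is a made-up example.

    import numpy as np
    from itertools import product
    from math import comb

    def weight_distribution(G):
        """Hamming weight distribution of the binary code spanned by the rows of G."""
        k, n = G.shape
        counts = [0] * (n + 1)
        for msg in product((0, 1), repeat=k):
            word = np.mod(np.array(msg) @ G, 2)
            counts[int(word.sum())] += 1
        return counts

    def macwilliams_dual(counts, n):
        """Weight distribution of the dual code via the MacWilliams identity,
        written with Krawtchouk polynomials and evaluated exactly over the integers."""
        size = sum(counts)  # |C| = 2^k
        dual = []
        for j in range(n + 1):
            total = sum(A_w * sum((-1) ** i * comb(w, i) * comb(n - w, j - i)
                                  for i in range(j + 1))
                        for w, A_w in enumerate(counts))
            dual.append(total // size)
        return dual

    if __name__ == "__main__":
        # Toy [7,4] Hamming code generator matrix (standard form).
        G = np.array([[1, 0, 0, 0, 0, 1, 1],
                      [0, 1, 0, 0, 1, 0, 1],
                      [0, 0, 1, 0, 1, 1, 0],
                      [0, 0, 0, 1, 1, 1, 1]])
        A = weight_distribution(G)
        print("code:", A)                       # [1, 0, 0, 7, 7, 0, 0, 1]
        print("dual:", macwilliams_dual(A, 7))  # [1, 0, 0, 0, 7, 0, 0, 0]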

    $t\bar{t}b\bar{b}$ hadroproduction with massive bottom quarks with PowHel

    The associated production of top-antitop-bottom-antibottom quarks is a relevant irreducible background for Higgs boson analyses in the top-antitop-Higgs production channel, with the Higgs boson decaying into a bottom-antibottom quark pair. We implement this process in the PowHel event generator, treating the bottom quarks as massive in all steps of the computation, which involves hard-scattering matrix elements in the 4-flavour number scheme combined with 4-flavour Parton Distribution Functions. Predictions with NLO QCD + Parton Shower accuracy, as obtained by PowHel + PYTHIA, are compared to those resulting from a previous PowHel implementation with hard-scattering matrix elements in the 5-flavour number scheme, taking as a baseline the example of a realistic analysis of top-antitop hadroproduction with additional $b$-jet activity, performed by the CMS collaboration at the Large Hadron Collider. Comment: 9 pages, 6 figures.

    Validating Network Value of Influencers by means of Explanations

    Recently, there has been significant interest in social influence analysis. One of the central problems in this area is identifying influencers: users such that, by convincing them to perform a certain action (like buying a new product), a large number of other users are influenced to follow the action. The client of such an application is a marketer who would target these influencers for marketing a given new product, say by providing free samples or discounts. Naturally, before committing resources to target an influencer, the marketer would want to validate the influence (or network value) of the influencers returned. This requires digging deeper into analytical questions such as: who are their followers, and on which actions (or products) are they influential? However, current approaches to identifying influencers largely work as a black box in this respect. The goal of this paper is to open up the black box, address these questions, and provide informative and crisp explanations for validating the network value of influencers. We formulate the problem of providing explanations (called PROXI) as a discrete optimization problem of feature selection. We show that PROXI is not only NP-hard to solve exactly, but also NP-hard to approximate within any reasonable factor. Nevertheless, we show interesting properties of the objective function and develop an intuitive greedy heuristic. We perform a detailed experimental analysis on two real-world datasets, Twitter and Flixster, and show that our approach generates concise and insightful explanations of the influence distribution of users, and that our greedy algorithm is effective and efficient with respect to several baselines.
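    As a rough illustration of the kind of greedy selection step described above, the Python sketch below picks features one at a time by marginal gain. The objective used here (a toy coverage score over hypothetical influenced-user sets) is only a stand-in; the actual PROXI objective is not reproduced.

    def greedy_select(candidate_features, score, k):
        """Pick up to k features, each time adding the one with the largest
        marginal gain in the set function `score`."""
        chosen = []
        for _ in range(k):
            best, best_gain = None, float("-inf")
            for f in candidate_features:
                if f in chosen:
                    continue
                gain = score(chosen + [f]) - score(chosen)
                if gain > best_gain:
                    best, best_gain = f, gain
            if best is None:
                break
            chosen.append(best)
        return chosen

    if __name__ == "__main__":
        # Hypothetical toy data: each candidate feature "explains" a set of influenced users.
        explained = {"product_A": {1, 2, 3}, "product_B": {3, 4}, "hashtag_X": {5}}

        def coverage(features):
            covered = set().union(*(explained[f] for f in features)) if features else set()
            return len(covered)

        print(greedy_select(list(explained), coverage, k=2))  # e.g. ['product_A', 'product_B']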

    Non-Gaussian Geostatistical Modeling using (skew) t Processes

    We propose a new model for regression and dependence analysis when addressing spatial data with possibly heavy tails and an asymmetric marginal distribution. We first propose a stationary process with t marginals obtained through scale mixing of a Gaussian process with an inverse square root process with Gamma marginals. We then generalize this construction by considering a skew-Gaussian process, thus obtaining a process with skew-t marginal distributions. For the proposed (skew) t process we study the second-order and geometrical properties and, in the t case, we provide analytic expressions for the bivariate distribution. In an extensive simulation study, we investigate the use of the weighted pairwise likelihood as a method of estimation for the t process. Moreover, we compare the performance of the optimal linear predictor of the t process versus the optimal Gaussian predictor. Finally, the effectiveness of our methodology is illustrated by analyzing a georeferenced dataset of maximum temperatures in Australia.
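    A much-simplified sketch of the scale-mixing idea is given below: a Gaussian process sample is divided by the square root of a Gamma variable so that the marginals become Student-t. Note the simplification: a single mixing variable is shared by all locations (the classical multivariate-t construction), whereas the paper mixes with an inverse square root process with Gamma marginals and adds a skew-Gaussian component to reach skew-t marginals. The exponential covariance and all parameter values are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    # Locations on a 1-D grid and an exponential covariance with range parameter phi.
    n, phi, nu = 200, 0.2, 5.0
    s = np.linspace(0.0, 1.0, n)
    cov = np.exp(-np.abs(s[:, None] - s[None, :]) / phi)

    # Gaussian process sample via Cholesky factorization (with a small jitter).
    L = np.linalg.cholesky(cov + 1e-10 * np.eye(n))
    z = L @ rng.standard_normal(n)

    # Scale mixing: W ~ Gamma(nu/2, scale=2/nu) is distributed as chi^2_nu / nu,
    # so z / sqrt(W) has Student-t marginals with nu degrees of freedom.
    w = rng.gamma(shape=nu / 2.0, scale=2.0 / nu)
    t_field = z / np.sqrt(w)

    print(t_field[:5])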

    A room-temperature alternating current susceptometer - Data analysis, calibration, and test

    An AC susceptometer operating in the range of 10 Hz to 100 kHz and at room temperature is designed, built, calibrated and used to characterize the magnetic behaviour of coated magnetic nanoparticles. Other weakly magnetic materials (in volumes of a few millilitres) can be analyzed as well. The setup uses a DAQ-based acquisition system to determine the amplitude and the phase of the sample magnetization as a function of the frequency of the driving magnetic field, which is powered by a digital waveform generator. A specific acquisition strategy makes the response directly proportional to the sample susceptibility, taking advantage of the differential nature of the coil assembly. A calibration method based on conductive samples is developed. Comment: 8 pages, 7 figures, 19 references.
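    The amplitude/phase extraction step can be pictured as a digital lock-in: the acquired waveform is projected onto sine and cosine references at the known drive frequency. The Python sketch below is a generic illustration on a synthetic signal, not the instrument's actual acquisition code; sampling rate, drive frequency and signal parameters are made up.

    import numpy as np

    rng = np.random.default_rng(3)
    fs = 1_000_000.0                   # sampling rate (Hz), assumed
    f_drive = 10_000.0                 # drive-field frequency (Hz), assumed
    t = np.arange(10_000) / fs         # 10 ms window = 100 full drive periods

    # Synthetic pickup signal: amplitude 2.5, phase lag 0.4 rad, plus noise.
    signal = 2.5 * np.cos(2 * np.pi * f_drive * t - 0.4) + 0.05 * rng.standard_normal(t.size)

    # In-phase (I) and quadrature (Q) components via single-frequency projection.
    ref_c = np.cos(2 * np.pi * f_drive * t)
    ref_s = np.sin(2 * np.pi * f_drive * t)
    I = 2.0 * np.mean(signal * ref_c)
    Q = 2.0 * np.mean(signal * ref_s)

    amplitude = np.hypot(I, Q)         # close to 2.5
    phase = np.arctan2(Q, I)           # close to 0.4 rad
    print(amplitude, phase)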

    Bioelectronic technologies and artificial intelligence for medical diagnosis and healthcare

    The application of findings from electronics to biology and medicine has significantly impacted health and wellbeing. Recent technological advances have allowed the development of new systems that can provide diagnostic information on portable point-of-care devices or smartphones. The decreasing size of electronics technologies, down to the atomic scale, and the advances in systems, cell, and molecular biology have the potential to increase the quality and reduce the costs of healthcare. Clinicians have pervasive access to new data from complex sensors, imaging tools, and a multitude of other sources, including personal health e-records and smart environments. Humans are far from being able to process this unprecedented volume of available data without advanced tools. Artificial intelligence (AI) can help clinicians identify patterns in this huge amount of data and so inform better choices for patients. In this Special Issue, original research papers focusing on recent advances have been collected, covering novel theories, innovative methods, and meaningful applications that could potentially lead to significant advances in the field.

    Performance of screening for aneuploidies by cell-free DNA analysis of maternal blood in twin pregnancies

    Objectives: To report the clinical implementation of cell-free DNA (cfDNA) analysis of maternal blood in screening for trisomies 21, 18 and 13 in twin pregnancies, and to examine variables that could influence the failure rate of the test. Methods: cfDNA testing was performed in 515 twin pregnancies at 10–28 weeks' gestation. The failure rate of the test to provide results was compared with that in 1847 singleton pregnancies, and logistic regression analysis was used to determine which factors among maternal and pregnancy characteristics were significant predictors of test failure. Results: The failure rate of the cfDNA test at first sampling was 1.7% in singletons and 5.6% in twins. Of those with a test result, the median fetal fraction in twins was 8.7% (range, 4.1–30.0%), which was lower than that in singletons (11.7% (range, 4.0–38.9%)). Multivariable regression analysis demonstrated that twin pregnancy, higher maternal weight and conception by in-vitro fertilization were significant independent predictors of test failure. Follow-up was available in 351 (68.2%) of the twin pregnancies and comprised 334 with euploid fetuses, 12 discordant for trisomy 21 and five discordant for trisomy 18. In all 323 euploid cases with a result, the risk score for each trisomy was < 1:10 000. In 11 of the 12 cases with trisomy 21 and in the five with trisomy 18, the cfDNA test gave a high-risk result, but in one case of trisomy 21 the score was < 1:10 000. Conclusion: In twin pregnancies, screening by cfDNA testing is feasible, but the failure rate is higher and the detection rate may be lower than in singletons.
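    The failure-rate analysis described in the Methods can be sketched generically as a logistic regression of test failure on pregnancy characteristics. The Python example below uses entirely synthetic placeholder data (not the study cohort), so the fitted coefficients are meaningless; it only shows the shape of such an analysis.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    n = 500
    X = np.column_stack([
        rng.integers(0, 2, n),          # twin pregnancy (0/1)
        rng.normal(70, 12, n),          # maternal weight (kg)
        rng.integers(0, 2, n),          # conception by IVF (0/1)
    ])

    # Synthetic outcome: failure probability rises with each predictor (arbitrary coefficients).
    logit = -4.0 + 1.2 * X[:, 0] + 0.03 * (X[:, 1] - 70) + 0.8 * X[:, 2]
    y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

    model = LogisticRegression().fit(X, y)
    print(model.coef_, model.intercept_)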

    Blockwise Euclidean likelihood for spatio-temporal covariance models

    A spatio-temporal blockwise Euclidean likelihood method is proposed for estimating covariance models from large spatio-temporal Gaussian data. The method uses moment conditions coming from the score of the pairwise composite likelihood. The blockwise approach guarantees considerable computational improvements over the standard pairwise composite likelihood method. To further speed up computation, a general-purpose graphics processing unit implementation using OpenCL is developed. The asymptotic properties of the proposed estimator are derived, and the finite-sample properties of the methodology are investigated by means of a simulation study that highlights the computational gains of the OpenCL graphics processing unit implementation. Finally, the estimation method is applied to a wind-component data set.
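    The basic building block of the method, the pairwise composite log-likelihood of a zero-mean Gaussian field, can be sketched as below for an exponential covariance sigma2 * exp(-d / phi) with a distance cutoff. The blockwise Euclidean-likelihood moment conditions and the OpenCL implementation are not reproduced; the covariance model, cutoff and parameter values here are illustrative assumptions.

    import numpy as np

    def pairwise_cl(params, coords, z, d_max):
        """Pairwise composite log-likelihood for a zero-mean Gaussian field with
        exponential covariance, using only pairs closer than d_max."""
        sigma2, phi = params
        ll = 0.0
        for i in range(len(z) - 1):
            d = np.linalg.norm(coords[i + 1:] - coords[i], axis=1)
            keep = d < d_max
            rho = np.exp(-d[keep] / phi)
            zj = z[i + 1:][keep]
            quad = (z[i] ** 2 - 2 * rho * z[i] * zj + zj ** 2) / (sigma2 * (1 - rho ** 2))
            ll += np.sum(-np.log(2 * np.pi) - np.log(sigma2)
                         - 0.5 * np.log(1 - rho ** 2) - 0.5 * quad)
        return ll

    if __name__ == "__main__":
        rng = np.random.default_rng(2)
        coords = rng.random((300, 2))                 # toy spatial locations in [0,1]^2
        dist = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
        cov = 1.0 * np.exp(-dist / 0.2)
        z = np.linalg.cholesky(cov + 1e-10 * np.eye(300)) @ rng.standard_normal(300)
        print(pairwise_cl((1.0, 0.2), coords, z, d_max=0.3))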

    HELAC-NLO

    Based on the OPP technique and the HELAC framework, HELAC-1LOOP is a program capable of numerically evaluating QCD virtual corrections to scattering amplitudes. A detailed presentation of the algorithm is given, along with instructions for running the code and benchmark results. The program is part of the HELAC-NLO framework, which allows for a complete evaluation of QCD NLO corrections. Comment: minor text revisions; version to appear in Comput. Phys. Commun.

    Stein hypothesis and screening effect for covariances with compact support

    In spatial statistics, the screening effect historically refers to the situation in which observations located far from the predictand receive a small (ideally, zero) kriging weight. Several factors play a crucial role in this phenomenon: among them, the spatial design, the dimension of the spatial domain where the observations are defined, the mean-square properties of the underlying random field, and its covariance function or, equivalently, its spectral density. The tour de force by Michael L. Stein provides a formal definition of the screening effect and puts emphasis on the Matérn covariance function, advocated as a good covariance function for yielding such an effect. Yet, it is often recommended not to use covariance functions with compact support. This paper shows that some classes of compactly supported covariance functions allow for a screening effect according to Stein's definition, in both regular and irregular settings of the spatial design. Further, numerical experiments suggest that the screening effect under a class of compactly supported covariance functions is even stronger than the screening effect under a Matérn model.
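    A small numerical illustration of the phenomenon discussed above is sketched below: simple-kriging weights on a 1-D transect under the spherical covariance model (compactly supported, valid in R^3 and hence on the line). The model choice, range and site layout are arbitrary assumptions, not the paper's experiments; the point is only that sites far from the prediction location receive weights at or near zero.

    import numpy as np

    def spherical_cov(h, sill=1.0, rng=0.3):
        """Spherical covariance: compactly supported on [0, rng]."""
        h = np.abs(h)
        c = sill * (1.0 - 1.5 * h / rng + 0.5 * (h / rng) ** 3)
        return np.where(h < rng, c, 0.0)

    # Observation sites on [0, 1] and a prediction site slightly off the grid.
    sites = np.linspace(0.0, 1.0, 21)
    x0 = 0.51

    C = spherical_cov(sites[:, None] - sites[None, :])
    c0 = spherical_cov(sites - x0)
    weights = np.linalg.solve(C + 1e-10 * np.eye(sites.size), c0)  # simple-kriging weights

    for s, w in zip(sites, weights):
        print(f"site {s:4.2f}  weight {w:+.4f}")
    # Sites farther than the covariance range (0.3) from x0 get weights at or near zero,
    # while the few nearby sites carry essentially all of the predictive weight.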